
A Clarinetist, a High School Student, and Some Climate Deniers Write a Science Paper

Mother Jones



How Pokémon Go is giving delivery robots an inch-perfect view of the world

MIT Technology Review

Niantic's AI spinout is training a new world model using 30 billion images of urban landmarks crowdsourced from players. Pokémon Go was the world's first augmented-reality megahit. Released in 2016 by the Google spinout Niantic, the AR twist on the juggernaut Pokémon franchise fast became a global phenomenon. From Chicago to Oslo to Enoshima, players hit the streets in the urgent hope of catching a Jigglypuff or a Squirtle or (with a huge amount of luck) an ultra-rare Galarian Zapdos hovering just out of reach, superimposed on the everyday world. "Five hundred million people installed that app in 60 days," says Brian McClendon, CTO at Niantic Spatial, an AI company that Niantic spun out in May last year. According to the video-game firm Scopely, which bought Pokémon Go from Niantic at the same time, the game still drew more than 100 million players in 2024, eight years after it launched.


Dirichlet Scale Mixture Priors for Bayesian Neural Networks

Arnstad, August, Rønneberg, Leiv, Storvik, Geir

arXiv.org Machine Learning

Neural networks are the cornerstone of modern machine learning, yet they can be difficult to interpret, give overconfident predictions, and are vulnerable to adversarial attacks. Bayesian neural networks (BNNs) partially alleviate these limitations, but have problems of their own. The key step of specifying prior distributions in BNNs is no trivial task, yet it is often skipped for convenience. In this work, we propose a new class of prior distributions for BNNs, the Dirichlet scale mixture (DSM) prior, that addresses current limitations of Bayesian neural networks through structured, sparsity-inducing shrinkage. Theoretically, we derive general dependence structures and shrinkage results for DSM priors and show how they manifest under the geometry induced by neural networks. In experiments on simulated and real-world data, we find that DSM priors encourage sparse networks through implicit feature selection, exhibit robustness under adversarial attacks, and deliver competitive predictive performance with substantially fewer effective parameters. Their advantages appear most pronounced in correlated, moderately small data regimes, and the resulting networks are more amenable to weight pruning. Moreover, by adopting heavy-tailed shrinkage mechanisms, our approach aligns with recent findings that such priors can mitigate the cold posterior effect, offering a principled alternative to the commonly used Gaussian priors.
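To make the idea of a Dirichlet scale mixture concrete: the abstract does not state the paper's exact construction, but a generic scale-mixture prior of this flavor (a sketch under that assumption, with hypothetical notation $w \in \mathbb{R}^p$ for a layer's weights, $\tau$ a global scale, and $\phi$ Dirichlet-distributed local proportions) can be written as

```latex
w_j \mid \sigma_j^2 \sim \mathcal{N}\!\left(0, \sigma_j^2\right), \qquad
\sigma_j^2 = \tau^2 \phi_j, \qquad
(\phi_1, \dots, \phi_p) \sim \mathrm{Dirichlet}(\alpha, \dots, \alpha), \qquad
\tau \sim \pi(\tau).
```

Because the $\phi_j$ sum to one, the Dirichlet allocates a shared scale budget across weights, which induces dependence between them and shrinks most weights toward zero while letting a few stay large; choosing a heavy-tailed $\pi(\tau)$ (e.g. half-Cauchy) is one standard way to obtain the heavy-tailed shrinkage the abstract mentions. The paper's actual DSM specification may differ in detail.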




PersonalSum: A User-Subjective Guided Personalized Summarization Dataset for Large Language Models

Neural Information Processing Systems

Summaries generated by Large Language Models (LLMs) can sometimes surpass those annotated by experts, such as journalists, according to human evaluations. However, there is limited research on whether these generic summaries meet the individual needs of ordinary people.